Supplementary Material for VDE and GCFN

A Theoretical Details and Proofs

Notation. We use the expectation operator in several different contexts in the proofs. Here, we show the full derivation of the lower bound on the negative mutual information. We derive the lower bound for the general case in which there are both observed and unobserved confounders. The VDE optimization involves expectations of parameterized distributions taken with respect to a distribution that itself has parameters. In our experiments, we let the control function be a categorical variable.
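When the control function is categorical with a small number of values, the inner expectation can be computed exactly by enumerating the categories rather than by sampling. A minimal sketch of this idea (our illustration; the function and variable names are hypothetical, not the paper's):

```python
import numpy as np

def expectation_categorical(f_theta, logits_phi):
    """E_{z ~ q_phi}[f_theta(z)] for a categorical z, computed exactly.

    Hypothetical sketch: f_theta is any parameterized function of the
    category index, q_phi is a categorical distribution given by logits.
    """
    probs = np.exp(logits_phi - logits_phi.max())
    probs /= probs.sum()                              # softmax over categories
    values = np.array([f_theta(k) for k in range(len(probs))])
    return float(probs @ values)                      # exact weighted sum

# Example: f_theta(k) = (k - 1)^2 under logits favouring category 1
logits = np.array([0.0, 2.0, 0.0])
val = expectation_categorical(lambda k: (k - 1) ** 2, logits)
```

Because the sum is exact, the gradient with respect to the distribution's parameters is also exact, avoiding the variance of score-function estimators for small category counts.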
Dependent Reachable Sets for the Constant Bearing Pursuit Strategy
Makkapati, Venkata Ramana, Vechalapu, Tulasi Ram, Comandur, Vinodhini, Hutchinson, Seth
This paper introduces a novel reachability problem for the scenario where one agent follows another agent using the constant bearing pursuit strategy, and analyzes the geometry of the reachable set of the follower. Key theoretical results are derived, providing bounds for the associated dependent reachable set. Simulation results are presented to empirically establish the shape of the dependent reachable set. In the process, an original optimization problem for the constant bearing strategy is formulated and analyzed.
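The constant bearing strategy can be simulated with a few lines of code: the follower picks a heading so that the line of sight (LOS) to the leader does not rotate, matching the leader's cross-LOS velocity component and spending the remaining speed on closing the range. A minimal sketch under our own simplified model (constant-velocity leader, fixed follower speed; not the paper's exact formulation):

```python
import numpy as np

def constant_bearing_step(p_f, p_l, v_l, speed_f):
    """Follower velocity under the constant bearing strategy (2-D sketch)."""
    los = p_l - p_f
    u = los / np.linalg.norm(los)             # unit line-of-sight vector
    perp = np.array([-u[1], u[0]])            # unit vector perpendicular to LOS
    v_perp = v_l @ perp                       # leader's cross-LOS speed
    # match the cross-LOS component so the bearing stays constant;
    # use the remaining speed to close the range along the LOS
    closing = np.sqrt(max(speed_f ** 2 - v_perp ** 2, 0.0))
    return closing * u + v_perp * perp

p_f = np.array([0.0, 0.0])                    # follower start
p_l = np.array([10.0, 0.0])                   # leader start
v_l = np.array([0.0, 1.0])                    # leader moves "north" at speed 1
dt, speed_f = 0.01, 2.0
for _ in range(2000):
    if np.linalg.norm(p_l - p_f) < 1e-2:      # intercept reached
        break
    p_f = p_f + constant_bearing_step(p_f, p_l, v_l, speed_f) * dt
    p_l = p_l + v_l * dt
dist = np.linalg.norm(p_l - p_f)
```

In this configuration the relative velocity stays aligned with the initial LOS, so the bearing is indeed constant and the follower intercepts the leader.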
Control Disturbance Rejection in Neural ODEs
Bayram, Erkan, Belabbas, Mohamed-Ali, Başar, Tamer
In this paper, we propose an iterative training algorithm for Neural ODEs that yields models resilient to control (parameter) disturbances. The method builds on our earlier work, Tuning without Forgetting: it similarly introduces training points sequentially and updates the parameters on new data within the space of parameters that do not decrease performance on the previously learned training points. The key difference is that, inspired by the concept of flat minima, we solve a minimax problem for a non-convex, non-concave functional over an infinite-dimensional control space. We develop a projected gradient descent algorithm on the space of parameters, which admits the structure of an infinite-dimensional Banach subspace. We show through simulations that this formulation enables the model to learn new data points effectively and to gain robustness against control disturbances.
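The projection idea behind such sequential updates can be illustrated in finite dimensions (our simplified analogue; the paper works over an infinite-dimensional control space): the gradient on a new data point is projected onto the orthogonal complement of the gradients at the previously learned points, so their fitted values are preserved to first order.

```python
import numpy as np

def project_out(grad, old_grads):
    """Remove from `grad` its components along the span of `old_grads`."""
    G = np.array(old_grads)                   # rows span the protected directions
    # orthogonal projector onto the row space of G, via the pseudo-inverse
    P = G.T @ np.linalg.pinv(G @ G.T) @ G
    return grad - P @ grad

# Toy linear model f(w, x) = w @ x already fitted on two old points
w = np.array([1.0, 0.5, -0.3])
x_old = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
old_preds = [w @ x for x in x_old]

# gradient of a squared loss on a new point (for f = w @ x, grad_w f = x)
x_new, target = np.array([1.0, 1.0, 1.0]), 2.0
g = 2 * (w @ x_new - target) * x_new
w = w - 0.1 * project_out(g, x_old)           # update only in unused directions

new_preds = [w @ x for x in x_old]            # unchanged by construction
```

For a linear model the preservation is exact; for a Neural ODE it holds to first order, which motivates the iterative, small-step structure of the algorithm.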
A Appendix

A.1 Proof of Proposition 3.2
First, we consider the solution of Eq. (9) for u(x) = kx. From the martingale property and Itô's isometry, it follows that Eη(t) = Eη(0) = 0, Eη(t) So, to satisfy Condition (iii) in Theorem 2.2, we have to set Therefore, the exponential stability of the zero solution is assured. Now, applying Gronwall's inequality, we get E[x(t) This completes the proof of the theorem.

A.3.2 Proof of Theorem 4.2
First, we prove the estimate for E[τ Applying Itô's formula to log V(x) yields: log V(x(t)) = log V(x Then, following the same procedure as for the energy cost in A.3.1, we obtain E[x(t) Here we explain this term in more detail: the training for the ES framework is not as efficient as for the AS framework.
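The moment estimates in the proofs above invoke Gronwall's inequality. For the reader's convenience, the standard integral form (a well-known statement, reproduced here for reference) reads:

```latex
\textbf{Gr\"onwall's inequality (integral form).}
Let $u, \beta \colon [0,T] \to [0,\infty)$ be continuous and let
$\alpha \ge 0$ be a constant. If
\[
  u(t) \le \alpha + \int_0^t \beta(s)\, u(s)\, \mathrm{d}s
  \qquad \text{for all } t \in [0,T],
\]
then
\[
  u(t) \le \alpha \exp\!\Big( \int_0^t \beta(s)\, \mathrm{d}s \Big)
  \qquad \text{for all } t \in [0,T].
\]
```

Applied with $u(t) = E\|x(t)\|^2$, this converts the integral bound obtained from Itô's formula into the exponential moment estimate used in the proof.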